2 research outputs found

    Automated data-collection for personalized facial expression recognition in human-robot interaction

    Systems that attempt to identify the emotion a person is feeling from their face have been around for quite some time. Facial expression recognition is the task of detecting expressions by interpreting patterns in an image. Because every person's face is unique, applying these methods to images of individual people allows their expressions to be recognized in a personalized way. In this research, we build a fully automated web-based data collection application that includes a virtual avatar to guide users through the procedure. The collected input consists of text labels for six emotions (anger, disgust, fear, happiness, surprise, and sorrow) plus neutral, together with a 20-second video clip for each. With this data, a personalized facial expression recognition model based on the MobileNets deep learning architecture would be developed.
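    The per-user collection protocol described above (one 20-second clip for each of the six emotions plus neutral) can be sketched as follows. The field names and the 30 fps frame rate are illustrative assumptions, not details taken from the paper.

    ```python
    # Assumed labels from the abstract: six emotions plus neutral.
    EMOTIONS = ["anger", "disgust", "fear", "happiness", "surprise", "sorrow", "neutral"]

    def session_records(user_id, fps=30, clip_seconds=20):
        """One 20-second clip per emotion; returns per-clip metadata dicts.

        fps and the field names are assumptions for illustration only.
        """
        records = []
        for label in EMOTIONS:
            records.append({
                "user": user_id,
                "emotion": label,
                "n_frames": fps * clip_seconds,  # 600 frames per clip at 30 fps
            })
        return records

    recs = session_records("u001")
    ```

    Under these assumptions, a single session yields 7 labeled clips, i.e. 4200 frames of training data per user.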

    Integration of 2D Textural and 3D Geometric Features for Robust Facial Expression Recognition

    Recognition of facial expressions is critical for successful social interactions and relationships. Facial expressions transmit emotional information, which is essential for human-machine interaction; significant research in computer vision has therefore been conducted, with promising results for facial expression detection in both academia and industry. 3D images have gained enormous popularity owing to their ability to overcome some of the constraints inherent in 2D imagery, such as variations in lighting. In this article, we present a method for recognizing facial expressions by combining features extracted from 2D texture images and 3D geometric data using the Local Binary Pattern (LBP) and the 3D Voxel Histogram of Oriented Gradients (3DVHOG), respectively. We applied various pre-processing operations to the MDPA-FACE3D and Bosphorus datasets, then classified images into seven universal emotions, namely anger, disgust, fear, happiness, sadness, neutral, and surprise. Using a Support Vector Machine classifier, we achieved accuracies of 88.5% and 92.9% on the MDPA-FACE3D and Bosphorus datasets, respectively.
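    As a rough sketch of the texture branch of such a pipeline (LBP histograms fed to an SVM), the toy example below trains on synthetic images. The minimal LBP implementation, image sizes, and synthetic data are assumptions for illustration; the paper's actual method also fuses 3DVHOG geometric features, which are omitted here.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "neutral", "surprise"]

    def lbp_histogram(img, bins=256):
        """Basic 8-neighbour Local Binary Pattern, returned as a normalized histogram."""
        c = img[1:-1, 1:-1].astype(np.int16)           # center pixels (border excluded)
        code = np.zeros(c.shape, dtype=np.uint8)
        shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        h, w = img.shape
        for bit, (dy, dx) in enumerate(shifts):
            neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(np.int16)
            code |= (neigh >= c).astype(np.uint8) << bit   # set bit if neighbour >= center
        hist, _ = np.histogram(code, bins=bins, range=(0, bins))
        return hist / hist.sum()

    # Synthetic stand-ins for face crops: each class gets a distinct texture frequency.
    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[0:32, 0:32]

    def make_image(label):
        base = 127 + 100 * np.sin(2 * np.pi * (label + 1) * xx / 32)
        return np.clip(base + rng.normal(0, 10, size=(32, 32)), 0, 255).astype(np.uint8)

    X = np.array([lbp_histogram(make_image(k))
                  for k in range(len(EMOTIONS)) for _ in range(10)])
    y = np.repeat(np.arange(len(EMOTIONS)), 10)

    clf = SVC(kernel="linear").fit(X, y)
    train_acc = clf.score(X, y)   # should be high on this easily separable toy data
    ```

    In practice the histogram here would be replaced by LBP features computed over real face crops and concatenated with the 3DVHOG descriptor before classification.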